Specifying tasks with videos is a powerful technique towards acquiring novel and general robot skills. However, reasoning over mechanics and dexterous interactions can make it challenging to scale learning of contact-rich manipulation. In this work, we focus on the problem of visual non-prehensile planar manipulation: given a video of an object in planar motion, find contact-aware robot actions that reproduce the same object motion. We propose a novel architecture, Differentiable Learning for Manipulation (\ours), which combines video decoding neural models with priors from contact mechanics by leveraging differentiable optimization and finite-difference-based simulation. Through extensive simulated experiments, we investigate the interplay between traditional model-based techniques and modern deep learning approaches. We find that our modular and fully differentiable architecture generalizes better than learning-only methods to unseen objects and motions. \url{https://github.com/baceituno/dlm}.
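As a rough illustration of the finite-difference component, the sketch below optimizes an action sequence against a target object trajectory by differencing through a black-box simulator. The `simulate` and `trajectory_loss` functions are toy placeholders, not the paper's contact-mechanics model.

```python
import numpy as np

def simulate(actions):
    """Placeholder contact simulation: maps a robot action sequence to an
    object trajectory. A toy integrator stands in for contact mechanics."""
    return np.cumsum(actions, axis=0)

def trajectory_loss(actions, target_traj):
    """Squared error between simulated and video-decoded object motion."""
    return np.sum((simulate(actions) - target_traj) ** 2)

def finite_difference_grad(loss_fn, actions, eps=1e-4):
    """Central finite differences through the (black-box) simulator."""
    grad = np.zeros_like(actions)
    flat, g = actions.ravel(), grad.ravel()
    for i in range(flat.size):
        orig = flat[i]
        flat[i] = orig + eps
        hi = loss_fn(actions)
        flat[i] = orig - eps
        lo = loss_fn(actions)
        g[i] = (hi - lo) / (2 * eps)
        flat[i] = orig
    return grad

# Gradient descent on actions to reproduce a target planar motion.
rng = np.random.default_rng(0)
target = np.cumsum(rng.normal(size=(20, 3)), axis=0)  # (x, y, theta) per step
actions = np.zeros((20, 3))
for _ in range(200):
    actions -= 0.05 * finite_difference_grad(
        lambda a: trajectory_loss(a, target), actions)
print("final loss:", trajectory_loss(actions, target))
```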
Matrix factorization exploits the idea that, in complex high-dimensional data, the actual signal typically lies in lower-dimensional structures. These lower-dimensional objects provide useful insight, with interpretability favored by sparse structures. Sparsity is, in addition, beneficial for regularization and thus helps avoid over-fitting. By exploiting Bayesian shrinkage priors, we devise a computationally convenient approach for high-dimensional matrix factorization. The dependence between row and column entities is modeled by inducing flexible sparse patterns within factors. The availability of external information is accounted for in such a way that structures are allowed but not imposed. Inspired by boosting algorithms, we pair the proposed approach with a numerical strategy relying on sequential inclusion and estimation of low-rank contributions, with a data-driven stopping rule. Practical advantages of the proposed approach are demonstrated by means of a simulation study and the analysis of soccer heatmaps obtained from new-generation tracking data.
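A minimal sketch of the boosting-style numerical strategy: sparse rank-one factors are added sequentially until a relative-improvement stopping rule fires. Soft-thresholding stands in for the Bayesian shrinkage prior here (an assumption; the paper's inference is fully Bayesian).

```python
import numpy as np

def soft_threshold(x, lam):
    """Stand-in for Bayesian shrinkage: sparsify via soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def sparse_rank_one(R, lam=0.1, iters=50):
    """Alternating power-style updates for one sparse rank-one factor."""
    u = R[:, 0].copy()
    for _ in range(iters):
        v = soft_threshold(R.T @ u, lam)
        if np.linalg.norm(v) == 0:
            return None
        v /= np.linalg.norm(v)
        u = soft_threshold(R @ v, lam)
        if np.linalg.norm(u) == 0:
            return None
        u /= np.linalg.norm(u)
    return u @ R @ v, u, v

def boosted_factorization(Y, tol=1e-3, max_rank=20):
    """Sequentially add sparse rank-one terms until the residual stops shrinking."""
    R, factors = Y.copy(), []
    prev = np.linalg.norm(R)
    for _ in range(max_rank):
        out = sparse_rank_one(R)
        if out is None:
            break
        d, u, v = out
        R -= d * np.outer(u, v)
        cur = np.linalg.norm(R)
        if (prev - cur) / prev < tol:   # data-driven stopping rule
            break
        factors.append((d, u, v))
        prev = cur
    return factors

rng = np.random.default_rng(1)
Y = rng.normal(size=(40, 4)) @ rng.normal(size=(4, 30)) + 0.1 * rng.normal(size=(40, 30))
print("estimated rank:", len(boosted_factorization(Y)))
```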
The ability to sequentially learn multiple tasks without forgetting is a key skill of biological brains, whereas it represents a major challenge to the field of deep learning. To avoid catastrophic forgetting, various continual learning (CL) approaches have been devised. However, these usually require discrete task boundaries. This requirement seems biologically implausible and often limits the application of CL methods in the real world, where tasks are not always well defined. Here, we take inspiration from neuroscience, where sparse, non-overlapping neuronal representations have been suggested to prevent catastrophic forgetting. As in the brain, we argue that these sparse representations should be chosen on the basis of feed-forward (stimulus-specific) as well as top-down (context-specific) information. To implement such selective sparsity, we use a bio-plausible form of hierarchical credit assignment known as Deep Feedback Control (DFC) and combine it with a winner-take-all sparsity mechanism. In addition to sparsity, we introduce lateral recurrent connections within each layer to further protect previously learned representations. We evaluate the new sparse-recurrent version of DFC on the split-MNIST computer vision benchmark and show that only the combination of sparsity and intra-layer recurrent connections improves CL performance with respect to standard backpropagation. Our method achieves performance similar to well-known CL methods, such as Elastic Weight Consolidation and Synaptic Intelligence, without requiring information about task boundaries. Overall, we showcase the idea of adopting computational principles from the brain to derive new, task-free learning algorithms for CL.
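The winner-take-all selection driven jointly by feed-forward and top-down input can be sketched as follows. This shows only the sparsity mechanism, not DFC's feedback-control credit assignment, and all weights and sizes are illustrative.

```python
import numpy as np

def k_winner_take_all(x, k):
    """Keep the k largest activations, zero the rest (sparse, mostly
    non-overlapping codes across contexts)."""
    out = np.zeros_like(x)
    idx = np.argsort(x)[-k:]
    out[idx] = x[idx]
    return out

rng = np.random.default_rng(2)
W_ff = rng.normal(size=(100, 784)) * 0.05   # feed-forward (stimulus) weights
W_ctx = rng.normal(size=(100, 10)) * 0.05   # top-down (context) weights

def layer_forward(stimulus, context, k=10):
    """Units are selected by stimulus-specific plus context-specific drive,
    so different contexts recruit largely disjoint subpopulations."""
    drive = W_ff @ stimulus + W_ctx @ context
    return k_winner_take_all(np.maximum(drive, 0.0), k)

x = rng.normal(size=784)
c = np.eye(10)[3]             # one-hot context signal
h = layer_forward(x, c)
print("active units:", np.count_nonzero(h))
```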
We study the learning dynamics of self-predictive learning for reinforcement learning, a family of algorithms that learn representations by minimizing the prediction error of their own future latent representations. Despite its recent empirical success, such algorithms have an apparent defect: trivial representations (such as constants) minimize the prediction error, yet it is obviously undesirable to converge to such solutions. Our central insight is that careful designs of the optimization dynamics are critical to learning meaningful representations. We identify that a faster-paced optimization of the predictor and semi-gradient updates on the representation are crucial to preventing representation collapse. Then, in an idealized setup, we show that self-predictive learning dynamics carry out spectral decomposition on the state transition matrix, effectively capturing information about the transition dynamics. Building on these theoretical insights, we propose bidirectional self-predictive learning, a novel self-predictive algorithm that learns two representations simultaneously. We examine the robustness of our theoretical insights with a number of small-scale experiments and showcase the promise of the novel representation learning algorithm with large-scale experiments.
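The two design choices, a faster-paced predictor and semi-gradient (stop-gradient) updates on the representation, can be illustrated in a linear toy model. The setup below is an assumption-laden sketch, not the paper's experimental configuration.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 20, 5                            # number of states, latent dimension
T = rng.dirichlet(np.ones(n), size=n)   # row-stochastic transition matrix
Phi = rng.normal(size=(n, d)) * 0.1     # representation, one row per state
P = rng.normal(size=(d, d)) * 0.1       # latent predictor

lr_phi, lr_p = 1e-2, 1e-1               # predictor learns on a faster timescale

for step in range(2000):
    target = T @ Phi                    # expected next-state representation
    err = Phi @ P - target              # latent prediction error
    # Semi-gradient: differentiate only through Phi @ P, treating the target
    # T @ Phi as a constant (stop-gradient), which the analysis identifies
    # as crucial to avoiding representation collapse.
    P -= lr_p * (Phi.T @ err)
    Phi -= lr_phi * (err @ P.T)

# Near-zero singular values would indicate a collapsed representation.
print("singular values of Phi:", np.linalg.svd(Phi, compute_uv=False).round(3))
```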
One of the major challenges in Deep Reinforcement Learning for control is the need for extensive training to learn the policy. Motivated by this, we present the design of the Control-Tutored Deep Q-Networks (CT-DQN) algorithm, a Deep Reinforcement Learning algorithm that leverages a control tutor, i.e., an exogenous control law, to reduce learning time. The tutor can be designed using an approximate model of the system, without any assumption of knowledge of the system's dynamics, and is not expected to achieve the control objective if used stand-alone. During learning, the tutor occasionally suggests an action, thus partially guiding exploration. We validate our approach on three scenarios from OpenAI Gym: the inverted pendulum, lunar lander, and car racing. We demonstrate that CT-DQN achieves better or equivalent data efficiency with respect to classic function-approximation solutions.
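A plausible reading of the tutored action selection is sketched below; `q_values`, `control_tutor`, and the gating probability `p_tutor` are hypothetical stand-ins, since the paper's exact gating rule is not given here.

```python
import numpy as np

rng = np.random.default_rng(4)

def q_values(state):
    """Placeholder for the DQN forward pass (hypothetical)."""
    return rng.normal(size=4)

def control_tutor(state):
    """Exogenous control law from an approximate model, e.g. a simple
    feedback rule. It need not solve the task on its own; it only
    biases exploration."""
    return int(state[0] > 0)          # toy: act on the sign of one coordinate

def select_action(state, eps=0.1, p_tutor=0.2):
    """Epsilon-greedy selection, occasionally deferring to the tutor."""
    if rng.random() < p_tutor:
        return control_tutor(state)   # tutor suggestion guides exploration
    if rng.random() < eps:
        return int(rng.integers(4))   # random exploration
    return int(np.argmax(q_values(state)))  # greedy w.r.t. learned Q

print(select_action(np.array([0.3, -0.1])))
```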
Shape displays are a class of haptic devices that enable whole-hand haptic exploration of 3D surfaces. However, their scalability is limited by the mechanical complexity and high cost of traditional actuator arrays. In this paper, we propose using electroadhesive auxetic skins as a strain-limiting layer to create programmable shape change in a continuous ("formable crust") shape display. Auxetic skins are manufactured as flexible printed circuit boards with dielectric-laminated electrodes on each auxetic unit cell (AUC), using monolithic fabrication to lower cost and assembly time. By layering multiple sheets and applying a voltage between electrodes on subsequent layers, electroadhesion locks individual AUCs, achieving a maximum in-plane stiffness variation of 7.6x with a power consumption of 50 µW/AUC. We first characterize an individual AUC and compare results to a kinematic model. We then validate the ability of a 5x5 AUC array to actively modify its own axial and transverse stiffness. Finally, we demonstrate this array in a continuous shape display as a strain-limiting skin to programmatically modulate the shape output of an inflatable LDPE pouch. Integrating electroadhesion with auxetics enables new capabilities for scalable, low-profile, and low-power control of flexible robotic systems.
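As a back-of-the-envelope illustration (not the paper's controller), one can model stiffness programming of the 5x5 array as a boolean lock mask, using the reported 7.6x stiffness ratio and 50 µW/AUC figures.

```python
import numpy as np

# Hypothetical stiffness model for a 5x5 electroadhesive auxetic array:
# applying a voltage between layers locks a cell, multiplying its in-plane
# stiffness by the reported maximum ~7.6x factor.
BASE_STIFFNESS = 1.0          # arbitrary units, unlocked AUC
LOCK_RATIO = 7.6              # reported maximum stiffness variation
POWER_PER_AUC = 50e-6         # reported 50 uW per locked cell

lock_mask = np.zeros((5, 5), dtype=bool)
lock_mask[2, :] = True        # lock the middle row to stiffen one axis

stiffness = np.where(lock_mask, LOCK_RATIO * BASE_STIFFNESS, BASE_STIFFNESS)
print("power draw: %.1f uW" % (lock_mask.sum() * POWER_PER_AUC * 1e6))
print(stiffness)
```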
Persistent homology, a powerful mathematical tool for data analysis, summarizes the shape of data by tracking topological features across changes in scale. Classical algorithms for persistent homology are often constrained by running times and memory requirements that grow exponentially in the number of data points. To overcome this problem, two quantum algorithms for persistent homology have been developed, based on two different approaches. However, both of these quantum algorithms take their input in the form of a point cloud, which can be restrictive considering that many data sets come in the form of time series. In this paper, we alleviate this issue by establishing a quantum Takens delay embedding algorithm, which turns a time series into a point cloud via a pertinent embedding into a higher-dimensional space. Given this quantum transformation from time series to point clouds, one may then use a quantum persistent homology algorithm to extract the topological features of the point cloud associated with the original time series.
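The classical (non-quantum) delay embedding underlying the algorithm is simple to state; the sketch below turns a periodic series into a loop-shaped point cloud whose persistent homology would reveal a single 1-dimensional hole.

```python
import numpy as np

def takens_embedding(x, dim=3, delay=2):
    """Classical Takens delay embedding: map a scalar time series to a
    point cloud in R^dim using lagged copies of the signal."""
    n = len(x) - (dim - 1) * delay
    return np.stack([x[i: i + n] for i in range(0, dim * delay, delay)], axis=1)

t = np.linspace(0, 8 * np.pi, 400)
series = np.sin(t)                        # periodic signal
cloud = takens_embedding(series, dim=2, delay=25)
print(cloud.shape)                        # (375, 2): traces a circle in R^2
```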
Only increasing accuracy without considering uncertainty may negatively impact Deep Neural Network (DNN) decision-making and decrease its reliability. This paper proposes five combined preprocessing and post-processing methods for time-series binary classification problems that simultaneously increase the accuracy and reliability of DNN outputs, applied to a 5G UAV security dataset. These techniques use DNN outputs as input parameters and process them in different ways. Two methods use a well-known Machine Learning (ML) algorithm as a complement, and the other three use only the confidence values that the DNN estimates. We compare seven different metrics, namely the Expected Calibration Error (ECE), Maximum Calibration Error (MCE), Mean Confidence (MC), Mean Accuracy (MA), Normalized Negative Log Likelihood (NLL), Brier Score Loss (BSL), and Reliability Score (RS), and the tradeoffs between them to evaluate the proposed hybrid algorithms. First, we show that the eXtreme Gradient Boosting (XGB) classifier might not be reliable for binary classification under the conditions this work presents. Second, we demonstrate that at least one of the proposed methods can achieve better results than classification in the DNN softmax layer. Finally, we show that the proposed methods may improve accuracy and reliability with better uncertainty calibration, based on the assumption that the RS measures the difference between the MC and MA metrics, and that this difference should be zero to increase reliability. For example, Method 3 presents the best RS of 0.65, even when compared to the XGB classifier, which achieves an RS of 7.22.
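Reading the abstract's assumption literally, the Reliability Score can be sketched as the gap between Mean Confidence and Mean Accuracy; this is a simplified stand-in, and the paper's exact normalization may differ (its reported RS values exceed 1).

```python
import numpy as np

def reliability_metrics(confidences, predictions, labels):
    """Mean Confidence (MC), Mean Accuracy (MA), and a Reliability Score
    taken here as the gap |MC - MA|; a zero gap means the model's average
    confidence matches its average accuracy (well-calibrated)."""
    mc = float(np.mean(confidences))
    ma = float(np.mean(predictions == labels))
    return mc, ma, abs(mc - ma)

conf = np.array([0.9, 0.8, 0.95, 0.6, 0.7])   # softmax confidences
pred = np.array([1, 0, 1, 1, 0])              # predicted classes
true = np.array([1, 0, 1, 0, 0])              # ground truth
mc, ma, rs = reliability_metrics(conf, pred, true)
print(f"MC={mc:.2f} MA={ma:.2f} RS={rs:.2f}")  # MC=0.79 MA=0.80 RS=0.01
```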
The ever-increasing scale of artificial neural networks does not stop at ultra-low-power edge devices. However, these networks typically have high computational demands and require specialized hardware accelerators to ensure that designs meet power and performance constraints. Manual optimization of neural networks together with the corresponding hardware accelerators can be very challenging. This paper presents HANNAH (Hardware Accelerator and Neural Network seArcH), a framework for automated and combined hardware/software co-design of deep neural networks and hardware accelerators for resource- and power-constrained edge devices. The optimization approach uses an evolution-based search algorithm, a neural network template technique, and analytical KPI models for the configurable UltraTrail hardware accelerator template to find an optimized neural network and accelerator configuration. We demonstrate that HANNAH can find suitable neural networks with minimized power consumption and high accuracy for different audio classification tasks, such as single-class wake word detection, multi-class keyword detection, and voice activity detection, outperforming related work.
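A schematic of the evolution-based joint search might look as follows; the search space, mutation scheme, and `fitness` scalarization are invented placeholders for the framework's trained-accuracy evaluation and UltraTrail KPI models.

```python
import random

random.seed(5)

# Joint search space: network hyperparameters and accelerator parameters.
SPACE = {
    "channels": [8, 16, 32, 64],
    "depth":    [2, 4, 6, 8],
    "pe_count": [4, 8, 16],      # accelerator processing elements
    "bitwidth": [4, 6, 8],
}

def sample():
    return {k: random.choice(v) for k, v in SPACE.items()}

def mutate(cfg):
    child = dict(cfg)
    k = random.choice(list(SPACE))
    child[k] = random.choice(SPACE[k])
    return child

def fitness(cfg):
    """Stand-in for trained accuracy and analytical power/latency KPI models;
    the real framework trains candidate networks and queries an accelerator
    model instead of these closed-form toys."""
    accuracy = 1.0 - 1.0 / (cfg["channels"] * cfg["depth"] * cfg["bitwidth"])
    power = cfg["channels"] * cfg["depth"] * cfg["pe_count"] * cfg["bitwidth"] * 1e-3
    return accuracy - 0.05 * power    # scalarized accuracy/power trade-off

population = [sample() for _ in range(16)]
for generation in range(30):
    population.sort(key=fitness, reverse=True)
    parents = population[:4]                       # truncation selection
    population = parents + [mutate(random.choice(parents)) for _ in range(12)]
print("best configuration:", max(population, key=fitness))
```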
DatalogMTL is an extension of Datalog with metric temporal operators, which has found applications in temporal ontology-based data access, query answering, and stream reasoning. Practical algorithms for DatalogMTL rely on materialisation-based reasoning, where temporal facts are derived in a forward-chaining manner over successive rule applications. However, current materialisation-based procedures follow a naive evaluation strategy, whose main source of inefficiency stems from redundant computations. In this paper, we propose a materialisation-based procedure which, analogously to the classical semi-naive algorithm for Datalog, aims to minimise redundant computation by ensuring that each temporal rule instance is considered at most once during the execution of the algorithm. Our experiments show that our optimised semi-naive strategy for DatalogMTL significantly reduces materialisation times.
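The classical semi-naive idea that the procedure generalizes is shown below for plain (non-temporal) Datalog transitive closure; DatalogMTL additionally annotates facts with time intervals and metric operators, which this sketch omits.

```python
# Semi-naive evaluation for a plain Datalog program computing transitive
# closure:  path(X,Y) :- edge(X,Y).   path(X,Z) :- path(X,Y), edge(Y,Z).
edges = {(1, 2), (2, 3), (3, 4), (2, 5)}

path = set(edges)        # facts derived so far
delta = set(edges)       # facts new in the last round
while delta:
    # Join only the *new* facts with edge, so each rule instance fires at
    # most once across the run -- avoiding the redundant re-derivations
    # that a naive strategy incurs.
    new = {(x, z) for (x, y) in delta for (y2, z) in edges if y == y2}
    delta = new - path   # keep only genuinely new derivations
    path |= delta
print(sorted(path))
```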